
'I wish I could push ChatGPT off a cliff': professors scramble to save critical thinking in an age of AI

The Guardian

Lea Pao, a professor of literature at Stanford University, has been experimenting with ways to get her students to learn offline. She has them memorize poems, perform at recitation events, and look at art in the real world. It's an effort to reconnect them to the bodily experience of learning, she said, and to keep them from turning to artificial intelligence to do the work for them. "There's no AI-proof anything," Pao said. "Rather than policing it, I hope that their overall experiences in this class will show them that there's a way out."




More than half of new articles on the internet are being written by AI

AIHub

The line between human and machine authorship is blurring: it has become increasingly difficult to tell whether something was written by a person or by AI. Now, in what may seem like a tipping point, the digital marketing firm Graphite recently published a study showing that more than 50% of articles on the web are being generated by artificial intelligence. As a scholar who explores how AI is built, how people use it in their everyday lives, and how it is affecting culture, I've thought a lot about what this technology can do and where it falls short. If you're more likely to read something written by AI than by a human on the internet, is it only a matter of time before human writing becomes obsolete? Or is this simply another technological development that humans will adapt to?


Justice in Judgment: Unveiling (Hidden) Bias in LLM-assisted Peer Reviews

Vasu, Sai Suresh Macharla, Sheth, Ivaxi, Wang, Hui-Po, Binkyte, Ruta, Fritz, Mario

arXiv.org Artificial Intelligence

The adoption of large language models (LLMs) is transforming the peer review process, from assisting reviewers in writing more detailed evaluations to generating entire reviews automatically. While these capabilities offer exciting opportunities, they also raise critical concerns about fairness and reliability. In this paper, we investigate bias in LLM-generated peer reviews by conducting controlled experiments on sensitive metadata, including author affiliation and gender. Our analysis consistently shows affiliation bias favoring institutions highly ranked on common academic rankings. Additionally, we find some gender preferences, which, even though subtle in magnitude, have the potential to compound over time. Notably, we uncover implicit biases that become more evident with token-based soft ratings.
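The "token-based soft ratings" mentioned above can be illustrated with a toy calculation: instead of keeping only the most likely rating token, the score is the probability-weighted mean over all rating tokens, which can expose subtle preferences that hard ratings hide. The function name and the probabilities below are invented for illustration; they are not from the paper.

```python
def soft_rating(token_probs):
    """Expected review score from an LLM's probability mass over rating tokens.

    token_probs: dict mapping rating value (int) -> probability of that token.
    A hard rating keeps only the argmax; the soft rating is the
    probability-weighted mean, which surfaces subtler leanings.
    """
    total = sum(token_probs.values())
    return sum(score * p for score, p in token_probs.items()) / total

# Two hypothetical reviews whose most likely rating token is 6 in both cases,
# but whose remaining probability mass leans in opposite directions:
a = soft_rating({5: 0.25, 6: 0.45, 7: 0.30})  # leans high, ~6.05
b = soft_rating({5: 0.30, 6: 0.45, 7: 0.25})  # leans low, ~5.95
```

Both reviews would receive an identical hard rating of 6, yet the soft ratings differ; aggregated over many reviews, such differences are the kind of implicit bias the study measures.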


Deep learning-based automated damage detection in concrete structures using images from earthquake events

Turer, Abdullah, Bai, Yongsheng, Sezen, Halil, Yilmaz, Alper

arXiv.org Artificial Intelligence

Timely assessment of the integrity of structures after seismic events is crucial for public safety and emergency response. This study focuses on assessing structural damage conditions using deep learning methods to detect exposed steel reinforcement in concrete buildings and bridges after large earthquakes. Steel bars are typically exposed after concrete spalling or large flexural or shear cracks. The amount and distribution of exposed steel reinforcement is an indication of structural damage and degradation. To automatically detect exposed steel bars, new datasets of images collected after the 2023 Turkey Earthquakes were labeled to represent a wide variety of damaged concrete structures. The proposed method builds upon a deep learning framework, enhanced with fine-tuning, data augmentation, and testing on public datasets. An automated classification framework is developed that can be used to identify inside/outside buildings and structural components. Then, a YOLOv11 (You Only Look Once) model is trained to detect cracking and spalling damage and exposed bars. Another YOLO model is fine-tuned to distinguish different categories of structural damage levels. All these trained models are used to create a hybrid framework to automatically and reliably determine the damage levels from input images. This research demonstrates that rapid and automated damage detection following disasters is achievable across diverse damage contexts by utilizing image data collection, annotation, and deep learning approaches.
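The fusion step of such a hybrid framework, i.e. mapping per-image detector outputs to an ordinal damage level, can be sketched as a simple rule. The labels and the severity ordering below are hypothetical stand-ins for illustration, not the paper's trained models or exact criteria.

```python
def damage_level(detections):
    """Illustrative fusion rule: map detection labels for one image to a
    damage level. `detections` is a list of dicts like {"label": "crack"},
    as a detector such as YOLO might emit after post-processing.

    The ordering reflects the abstract's observation that exposed
    reinforcement typically follows spalling or large cracks, so it is
    treated as the most severe indicator.
    """
    labels = {d["label"] for d in detections}
    if "exposed_bar" in labels:
        return "severe"
    if "spalling" in labels:
        return "moderate"
    if "crack" in labels:
        return "light"
    return "none"

# Example: an image with both cracking and exposed bars is rated by the
# most severe indicator present.
level = damage_level([{"label": "crack"}, {"label": "exposed_bar"}])
```

In the actual system this rule would be replaced or augmented by the fine-tuned damage-level classifier; the sketch only shows how heterogeneous detector outputs can be reduced to a single per-image decision.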


AI may help you pick the perfect avocado

Popular Science

A new program trained on iPhone photos could curb food waste. Avocados have a carbon footprint roughly three times that of bananas. The days of buying a rock-hard avocado in the hopes of avoiding mushy food waste may soon be over. Machine learning researchers at Oregon State University (OSU) recently designed an artificial intelligence program that visually assesses avocado quality and ripeness.


Unsupervised Outlier Detection in Audit Analytics: A Case Study Using USA Spending Data

Li, Buhe, Kaplan, Berkay, Lazirko, Maksym, Kogan, Aleksandr

arXiv.org Artificial Intelligence

This study investigates the effectiveness of unsupervised outlier detection methods in audit analytics, utilizing USA spending data from the U.S. Department of Health and Human Services (DHHS) as a case example. We employ and compare multiple outlier detection algorithms, including Histogram-based Outlier Score (HBOS), Robust Principal Component Analysis (PCA), Minimum Covariance Determinant (MCD), and K-Nearest Neighbors (KNN) to identify anomalies in federal spending patterns. The research addresses the growing need for efficient and accurate anomaly detection in large-scale governmental datasets, where traditional auditing methods may fall short. Our methodology involves data preparation, algorithm implementation, and performance evaluation using precision, recall, and F1 scores. Results indicate that a hybrid approach, combining multiple detection strategies, enhances the robustness and accuracy of outlier identification in complex financial data. This study contributes to the field of audit analytics by providing insights into the comparative effectiveness of various outlier detection models and demonstrating the potential of unsupervised learning techniques in improving audit quality and efficiency. The findings have implications for auditors, policymakers, and researchers seeking to leverage advanced analytics in governmental financial oversight and risk management.
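Two of the detectors named above are simple enough to sketch from first principles. Below are simplified NumPy stand-ins for HBOS (a point's score sums the negative log of each feature's histogram density at that point) and KNN (the distance to the k-th nearest neighbor), combined by rank averaging as a minimal "hybrid" flagging rule. This is an illustration under invented data, not the study's implementation, parameters, or DHHS dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
X[:3] += 8.0  # plant three gross anomalies (e.g. outsized payment records)

def hbos_scores(X, bins=10):
    """Simplified HBOS: per feature, score each point by the negative log
    of the histogram density of its bin, then sum over features."""
    n, d = X.shape
    scores = np.zeros(n)
    for j in range(d):
        hist, edges = np.histogram(X[:, j], bins=bins, density=True)
        idx = np.clip(np.searchsorted(edges, X[:, j], side="right") - 1, 0, bins - 1)
        scores += -np.log(hist[idx] + 1e-12)
    return scores

def knn_scores(X, k=5):
    """KNN outlier score: distance to the k-th nearest neighbor
    (column 0 of the sorted distance matrix is the self-distance)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    D.sort(axis=1)
    return D[:, k]

def hybrid_flags(X, contamination=0.05):
    """Hybrid rule: average the two detectors' ranks and flag the top
    `contamination` fraction, so a point must look anomalous to both."""
    ranks = np.argsort(np.argsort(hbos_scores(X))) + np.argsort(np.argsort(knn_scores(X)))
    return ranks >= np.quantile(ranks, 1 - contamination)
```

Rank averaging is one of the simplest ways to combine detectors with incommensurable score scales; the study's precision/recall/F1 evaluation would then be computed against labeled anomalies.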


Generative AI in clinical practice: novel qualitative evidence of risk and responsible use of Google's NotebookLM

Reuter, Max, Philippone, Maura, Benton, Bond, Dilley, Laura

arXiv.org Artificial Intelligence

Figure 1 presents examples of NotebookLM's shortcomings: inaccurate responses given to user queries (output is stylized for visual clarity), including one in which NotebookLM advises the user to tell their patients that eating rocks is healthy, citing the user's own document. Using NotebookLM to educate medical professionals therefore presently risks misleading them. Passages from Dihan et al. advocating for use of NotebookLM are associated with clinical and/or ethical concerns; notably, NotebookLM is a commercial entity that does not abide by patient privacy regulations. Given any set of documents, and especially complex ones, LLMs may misinterpret and subsequently misrepresent some of their contents. NotebookLM can neither identify misinformation contained within uploaded files nor incorporate relevant information beyond the uploaded content, even though its citations are automatically generated for all content it pulls from those materials. No funding was received for the publication of this article.


On the clustering behavior of sliding windows

Alexeev, Boris, Luo, Wenyan, Mixon, Dustin G., Zhang, Yan X

arXiv.org Artificial Intelligence

Clustering is one of the most common tasks in data science, and given the ubiquity of timeseries data, one is naturally inclined to cluster it. In order to perform Euclidean clustering (such as k-means clustering) on timeseries data, one must first map the data into Euclidean space. This is traditionally accomplished with a sliding window.
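The sliding-window map can be sketched in a few lines, assuming a 1-D series and window length w: each length-w window becomes one point in R^w, and the resulting point cloud is what k-means then partitions. The example signal is invented for illustration.

```python
import numpy as np

def sliding_window_embed(x, w):
    """Map a 1-D timeseries x of length n into R^w: row i of the result
    is the window x[i:i+w], giving n - w + 1 points for clustering."""
    return np.lib.stride_tricks.sliding_window_view(x, w)

# A periodic signal: its windows trace out a closed loop in R^w,
# which Euclidean clustering such as k-means will cut into arcs.
t = np.linspace(0, 4 * np.pi, 200)
X = sliding_window_embed(np.sin(t), 20)  # shape (181, 20)
```

Because consecutive windows overlap in w - 1 entries, neighboring rows of X are close in Euclidean distance, which is precisely the geometric structure whose effect on clustering the paper studies.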